
    Mavericks and Lotteries

    In 2013 the Health Research Council of New Zealand began a stream of funding titled 'Explorer Grants', and in 2017 changes were introduced to the funding mechanisms of the Volkswagen Foundation 'Experiment!' and the New Zealand Science for Technological Innovation challenge 'Seed Projects'. All three funding streams aim at encouraging novel scientific ideas, and all now employ random selection by lottery as part of the grant selection process. The idea of funding science by lottery has emerged independently in several corners of academia, including in philosophy of science. This paper reviews the conceptual and institutional landscape in which this policy proposal emerged, how different academic fields presented and supported arguments for the proposal, and how these have been reflected (or not) in actual policy. The paper presents an analytical synthesis of the arguments presented to date, notes how they support each other and shape policy recommendations in various ways, and where competing arguments highlight the need for further analysis or more data. In addition, it provides lessons for how philosophers of science can engage in shaping science policy, and in particular highlights the importance of mixing complementary expertise: it takes a (conceptually diverse) village to raise (good) policy.
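The abstract does not spell out the selection mechanism, but the funding streams it names are commonly described as two-stage lotteries: peer review first screens proposals for fundability, then winners are drawn at random from the fundable pool. A minimal sketch of that assumed mechanism (the function name, proposal structure, and `fundable` flag are illustrative assumptions, not details from the paper):

```python
import random

def lottery_selection(proposals, n_grants, rng=None):
    """Two-stage grant lottery (illustrative sketch): peer review screens
    out non-fundable proposals, then grants are drawn at random from the
    remaining pool rather than ranked by merit score."""
    rng = rng or random.Random()
    fundable = [p for p in proposals if p["fundable"]]
    if len(fundable) <= n_grants:
        return fundable  # everyone fundable is funded; no draw needed
    return rng.sample(fundable, n_grants)

# Example: 10 proposals, half judged fundable, 3 grants available.
proposals = [{"id": i, "fundable": i % 2 == 0} for i in range(10)]
winners = lottery_selection(proposals, 3, random.Random(42))
```

The design point of such schemes is that random draw among fundable proposals removes the need to produce a fine-grained (and arguably unreliable) ranking within the pool.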

    Autonomy and machine learning at the interface of nuclear weapons, computers and people

    A new era for our species started in 1945: with the terrifying demonstration of the power of the atom bomb in Hiroshima and Nagasaki, Japan, the potential global catastrophic consequences of human technology could no longer be ignored. Within the field of global catastrophic and existential risk, nuclear war is one of the more iconic scenarios, although significant uncertainties remain about its likelihood and potential destructive magnitude. The risk posed to humanity from nuclear weapons is not static. In tandem with geopolitical and cultural changes, technological innovations could have a significant impact on how the risk of the use of nuclear weapons changes over time. Increasing attention has been given in the literature to the impact of digital technologies, and in particular autonomy and machine learning, on nuclear risk. Most of this attention has focused on 'first-order' effects: the introduction of technologies into nuclear command-and-control and weapon-delivery systems. This essay focuses instead on higher-order effects: those that stem from the introduction of such technologies into more peripheral systems, with a more indirect (but no less real) effect on nuclear risk. It first describes and categorizes the new threats introduced by these technologies (Section I). It then considers policy responses to address these new threats (Section II).

    Centralised Funding and the Division of Cognitive Labour

    Project selection by funding bodies directly influences the division of cognitive labour in scientific communities. I present a novel adaptation of an existing agent-based model of scientific research, in which a central funding body selects from proposed projects located on an epistemic landscape. I simulate four different selection strategies: selection based on a god's-eye perspective of project significance, selection based on past success, selection based on past funding, and random selection. Results show the size of the landscape matters: on small landscapes historical information leads to slightly better results than random selection, but on large landscapes random selection greatly outperforms historically informed selection.
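The four selection strategies described above can be sketched as a toy simulation. Everything concrete here, including the one-dimensional landscape, the peak shape, the agents' local drift, and the success model, is an assumption made for illustration; it is not the paper's actual model:

```python
import random

def make_landscape(size, n_peaks, rng):
    """A 1-D 'epistemic landscape': each location has a significance in [0, 1]."""
    landscape = [0.05] * size
    for _ in range(n_peaks):
        centre = rng.randrange(size)
        for d in range(-3, 4):
            i = (centre + d) % size
            landscape[i] = max(landscape[i], 1.0 - 0.2 * abs(d))
    return landscape

def simulate(strategy, size=200, n_agents=20, n_rounds=50, grants=5, seed=0):
    """Run one funding regime; return total significance of successful projects."""
    rng = random.Random(seed)
    landscape = make_landscape(size, n_peaks=3, rng=rng)
    positions = [rng.randrange(size) for _ in range(n_agents)]
    past_success = [0] * n_agents   # completed projects per agent
    past_funding = [0] * n_agents   # grants previously won per agent
    total = 0.0
    for _ in range(n_rounds):
        if strategy == "random":
            funded = rng.sample(range(n_agents), grants)
        elif strategy == "past_success":
            funded = sorted(range(n_agents),
                            key=lambda a: past_success[a], reverse=True)[:grants]
        elif strategy == "past_funding":
            funded = sorted(range(n_agents),
                            key=lambda a: past_funding[a], reverse=True)[:grants]
        elif strategy == "oracle":  # god's-eye view of true project significance
            funded = sorted(range(n_agents),
                            key=lambda a: landscape[positions[a]], reverse=True)[:grants]
        else:
            raise ValueError(f"unknown strategy: {strategy}")
        for a in funded:
            past_funding[a] += 1
            sig = landscape[positions[a]]
            if rng.random() < sig:  # project succeeds with probability = significance
                past_success[a] += 1
                total += sig
        # agents drift locally across the landscape between rounds
        positions = [(p + rng.choice([-1, 0, 1])) % size for p in positions]
    return total
```

Varying `size` in this sketch is the analogue of the paper's landscape-size comparison; because the dynamics are invented here, the sketch shows the shape of the experiment rather than reproducing the reported result.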

    AI Systems of Concern

    Concerns around future dangers from advanced AI often centre on systems hypothesised to have intrinsic characteristics such as agent-like behaviour, strategic awareness, and long-range planning. We label this cluster of characteristics as "Property X". Most present AI systems are low in "Property X"; however, in the absence of deliberate steering, current research directions may rapidly lead to the emergence of highly capable AI systems that are also high in "Property X". We argue that "Property X" characteristics are intrinsically dangerous, and when combined with greater capabilities will result in AI systems for which safety and control are difficult to guarantee. Drawing on several scholars' alternative frameworks for possible AI research trajectories, we argue that most of the proposed benefits of advanced AI can be obtained by systems designed to minimise this property. We then propose indicators and governance interventions to identify and limit the development of systems with risky "Property X" characteristics.